North Sea


Bafta games awards 2025: full list of winners

The Guardian

In a video game year dominated by dark, bloody fantasy adventures – and continued job losses and studio closures – it was a cute robot that stole the night at the 2025 Bafta video game awards. Sony's family-friendly platformer Astro Bot won in five categories at yesterday evening's ceremony, including best game and game design. The rest of the awards were evenly spread across a range of triple-A and independent titles. Oil rig thriller Still Wakes the Deep was the next biggest winner with three awards: new intellectual property, performer in a leading role and performer in a supporting role. Clearly actors looking for Bafta-winning roles need look no further than the North Sea.


Deep Reinforcement Learning for Sim-to-Real Policy Transfer of VTOL-UAVs Offshore Docking Operations

arXiv.org Artificial Intelligence

This paper proposes a novel Reinforcement Learning (RL) approach for sim-to-real policy transfer of Vertical Take-Off and Landing Unmanned Aerial Vehicles (VTOL-UAVs). The proposed approach is designed for VTOL-UAV landing on offshore docking stations in maritime operations. VTOL-UAVs in maritime operations face limits on their operational range, stemming primarily from the constraints of their battery capacity. Autonomous landing on a charging platform is an attractive way to mitigate these limits by enabling battery charging and data transfer. However, current Deep Reinforcement Learning (DRL) methods exhibit drawbacks, including lengthy training times and modest success rates. In this paper, we tackle these concerns by decomposing the landing procedure into a sequence of more manageable subtasks: an approach phase and a landing phase. The proposed architecture uses a model-based control scheme for the approach phase, in which the VTOL-UAV approaches the offshore docking station. In the landing phase, DRL agents are trained offline to learn the optimal policy for docking on the offshore station. The Joint North Sea Wave Project (JONSWAP) spectrum model is used to generate a wave model for each training episode, improving policy generalization for sim-to-real transfer. A set of DRL algorithms is tested through numerical simulation, including a value-based agent (Deep Q Networks, DQN) and a policy-based agent (Proximal Policy Optimization, PPO). The numerical experiments show that the PPO agent can learn complicated and efficient policies for landing in uncertain environments, which in turn raises the likelihood of successful sim-to-real transfer.
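The JONSWAP spectrum the abstract relies on has a standard closed form. A minimal sketch of evaluating it, with generic North Sea default parameters (the peak frequency, Phillips constant, and peak-enhancement factor below are illustrative assumptions, not values from the paper):

```python
import math

def jonswap(f, fp=0.1, alpha=0.0081, gamma=3.3, g=9.81):
    """JONSWAP wave-elevation spectrum S(f) in m^2/Hz.

    f     : wave frequency [Hz]
    fp    : peak frequency [Hz]
    alpha : Phillips constant
    gamma : peak-enhancement factor (~3.3 is a typical North Sea value)
    """
    sigma = 0.07 if f <= fp else 0.09                  # spectral width
    r = math.exp(-((f - fp) ** 2) / (2 * sigma**2 * fp**2))
    # Pierson-Moskowitz base spectrum, then the gamma^r peak enhancement
    pm = (alpha * g**2 * (2 * math.pi) ** -4 * f ** -5
          * math.exp(-1.25 * (fp / f) ** 4))
    return pm * gamma**r

# Evaluate across a frequency band; the spectrum peaks near fp
for f in (0.05, 0.1, 0.2, 0.4):
    print(f, jonswap(f))
```

Sampling wave realizations per episode from this spectrum (e.g. by summing sinusoids with amplitudes drawn from S(f) bins) is one common way to randomize the sea state for policy generalization.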


Disentangling Heterogeneous Knowledge Concept Embedding for Cognitive Diagnosis on Untested Knowledge

arXiv.org Artificial Intelligence

Cognitive diagnosis is a fundamental and critical task in learning assessment, which aims to infer students' proficiency on knowledge concepts from their response logs. Current works assume each knowledge concept will certainly be tested and covered by multiple exercises. However, whether in online or offline courses, it is hardly feasible to completely cover all knowledge concepts in a handful of exercises. Restricted tests lead to undiscovered knowledge deficits, especially on untested knowledge concepts (UKCs). In this paper, we propose a novel Disentangling Heterogeneous Knowledge Cognitive Diagnosis framework for untested knowledge (DisKCD). Specifically, we leverage course grades, exercise questions, and learning resources to learn potential representations of students, exercises, and knowledge concepts. In particular, knowledge concepts are disentangled into tested and untested based on the limited set of actual exercises. We construct a heterogeneous relation graph over students, exercises, tested knowledge concepts (TKCs), and UKCs. Then, through a hierarchical heterogeneous message-passing mechanism, the fine-grained relations are incorporated into the entity embeddings. Finally, the embeddings are fed to multiple existing cognitive diagnosis models to infer students' proficiency on UKCs. Experimental results on real-world datasets show that the proposed model effectively improves performance on the task of diagnosing students' proficiency on UKCs. Our anonymized code is available at https://anonymous.4open.science/r/DisKCD.
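As a schematic of the heterogeneous message-passing idea, the sketch below shows how an untested concept with no response logs of its own can still receive signal from related tested concepts. The toy graph, relation names, and plain mean aggregation are illustrative assumptions for exposition, not DisKCD's actual architecture:

```python
# Toy heterogeneous graph: node -> embedding, plus typed edge lists.
# All names here (student1, tkc1, "requires", ...) are hypothetical.
emb = {
    "student1": [0.2, 0.1], "exercise1": [0.5, 0.3],
    "tkc1": [0.4, 0.6], "ukc1": [0.0, 0.0],   # UKC starts uninformed
}
edges = {  # relation -> list of (src, dst)
    "answers":  [("student1", "exercise1")],
    "tests":    [("exercise1", "tkc1")],
    "requires": [("tkc1", "ukc1")],           # UKC linked to a tested concept
}

def message_pass(emb, edges):
    """One round: each node averages its own embedding with the
    embeddings of its incoming neighbours (mean aggregation)."""
    incoming = {node: [] for node in emb}
    for pairs in edges.values():
        for src, dst in pairs:
            incoming[dst].append(emb[src])
    new = {}
    for node, vec in emb.items():
        msgs = incoming[node] + [vec]
        new[node] = [sum(v[i] for v in msgs) / len(msgs)
                     for i in range(len(vec))]
    return new

emb = message_pass(emb, edges)
print(emb["ukc1"])  # no longer zero: it inherits signal from tkc1
```

A hierarchical scheme would run such rounds per relation type with learned (rather than uniform) aggregation weights; the point here is only that graph structure lets information flow to concepts that no exercise tests directly.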


Dutch tulip farm utilizes AI robot to slow the spread of plant disease

FOX News

The robot uses its chest, hips and arms to handle objects -- just like we do. Theo works weekdays, weekends and nights and never complains about a sore spine despite performing hour upon hour of what, for a regular farm hand, would be backbreaking labor checking Dutch tulip fields for sick flowers. The boxy robot -- named after a retired employee at the WAM Pennings farm near the Dutch North Sea coast -- is a new high-tech weapon in the battle to root out disease from the bulb fields as they erupt into a riot of springtime color. On a windy spring morning, the robot trundled Tuesday along rows of yellow and red "goudstuk" tulips, checking each plant and, when necessary, killing diseased bulbs to prevent the spread of the tulip-breaking virus. The dead bulbs are removed from healthy ones in a sorting warehouse after they have been harvested.


Active Learning for Abrupt Shifts Change-point Detection via Derivative-Aware Gaussian Processes

arXiv.org Artificial Intelligence

Change-point detection (CPD) is crucial for identifying abrupt shifts in data, which influence decision-making and efficient resource allocation across various domains. To address the challenges posed by costly and time-intensive data acquisition in CPD, we introduce the Derivative-Aware Change Detection (DACD) method. It leverages the derivative process of a Gaussian process (GP) for active learning (AL), aiming to pinpoint change-point locations effectively. DACD balances exploitation and exploration of the derivative process through multiple data acquisition functions (AFs). Using the GP derivative mean and variance as criteria, DACD sequentially selects the next sampling point, enhancing algorithmic efficiency and ensuring reliable and accurate results. We investigate the effectiveness of the DACD method in diverse scenarios and show that it outperforms other active-learning change-point detection approaches.
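A toy version of derivative-aware acquisition can make the idea concrete: fit a GP to a few samples of a step-like signal, then pick the next sample where the posterior derivative mean is largest in magnitude. Everything below (1-D input, plain RBF kernel, fixed hyperparameters, derivative-mean-only acquisition) is a simplifying assumption, not the paper's DACD implementation:

```python
import math

def rbf(a, b, l=0.1):
    return math.exp(-((a - b) ** 2) / (2 * l * l))

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k]
                              for k in range(r + 1, n))) / M[r][r]
    return x

# Expensive samples of a signal with an abrupt shift near x = 0.5
xs = [0.1, 0.3, 0.45, 0.55, 0.7, 0.9]
ys = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
l, noise = 0.1, 1e-4

K = [[rbf(a, b, l) + (noise if i == j else 0.0)
      for j, b in enumerate(xs)] for i, a in enumerate(xs)]
alpha = solve(K, ys)          # (K + noise*I)^-1 y

def deriv_mean(x):
    # d/dx* of the GP posterior mean: sum_i alpha_i * dk(x*, x_i)/dx*
    return sum(a * (-(x - xi) / (l * l)) * rbf(x, xi, l)
               for a, xi in zip(alpha, xs))

cands = [i / 100 for i in range(101)]
x_next = max(cands, key=lambda x: abs(deriv_mean(x)))
print(x_next)  # the acquisition targets the suspected change point
```

A fuller acquisition function would also use the derivative variance to keep exploring regions far from existing samples; this sketch shows only the exploitation half.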


Closing the Loop: A Framework for Trustworthy Machine Learning in Power Systems

arXiv.org Artificial Intelligence

Deep decarbonization of the energy sector will require massive penetration of stochastic renewable energy resources and an enormous amount of grid asset coordination; this represents a challenging paradigm for the power system operators who are tasked with maintaining grid stability and security in the face of such changes. With its ability to learn from complex datasets and provide predictive solutions on fast timescales, machine learning (ML) is well-positioned to help overcome these challenges as power systems transform in the coming decades. In this work, we outline five key challenges (dataset generation, data pre-processing, model training, model assessment, and model embedding) associated with building trustworthy ML models which learn from physics-based simulation data. We then demonstrate how linking together individual modules, each of which overcomes a respective challenge, at sequential stages in the machine learning pipeline can help enhance the overall performance of the training process. In particular, we implement methods that connect different elements of the learning pipeline through feedback, thus "closing the loop" between model training, performance assessments, and re-training. We demonstrate the effectiveness of this framework, its constituent modules, and its feedback connections by learning the N-1 small-signal stability margin associated with a detailed model of a proposed North Sea Wind Power Hub system.
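The feedback idea can be caricatured in a few lines: assessment locates the worst prediction, and that location is fed back into data generation before retraining. Everything below (the surrogate "simulation", the nearest-neighbour "model", and the error threshold) is an illustrative stand-in, not the paper's framework:

```python
def simulate_margin(x):
    """Stand-in for a physics-based simulation: a stability margin
    as a function of one operating parameter (illustrative only)."""
    return 1.0 - 2.0 * x if x < 0.4 else 0.2 - 0.5 * (x - 0.4)

def fit(data):
    """Toy 'model': nearest-neighbour lookup over the training set."""
    def predict(x):
        return min(data, key=lambda p: abs(p[0] - x))[1]
    return predict

def assess(predict, n=200):
    """Return the worst absolute error on a dense grid, and where."""
    worst, where = 0.0, 0.0
    for i in range(n):
        x = i / (n - 1)
        err = abs(predict(x) - simulate_margin(x))
        if err > worst:
            worst, where = err, x
    return worst, where

# Closing the loop: assessment drives targeted re-sampling + re-training
data = [(x, simulate_margin(x)) for x in (0.0, 0.5, 1.0)]
for _ in range(20):
    model = fit(data)
    worst, where = assess(model)
    if worst < 0.1:
        break
    data.append((where, simulate_margin(where)))  # sample where model fails

print(round(worst, 3), len(data))
```

The open-loop alternative (generate all data up front, train once) would need a far denser uniform sample to reach the same worst-case error; feeding assessment results back concentrates simulation effort where the model is weakest.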


How robots can help build offshore wind turbines more quickly

The Japan Times

The invasion of Ukraine has put the U.S. and Europe on a wartime mission to abandon Russian fossil fuels. This series looks at speeding up zero-carbon alternatives by lowering political and financial barriers. Sign up here to get the next story sent to your inbox. Trying to attach a million-dollar, 60-ton wind turbine blade to its base is challenging in any circumstance -- getting the angle wrong by even a fraction of a degree could affect the machine's ability to generate power. Now imagine trying to do it in the middle of the North Sea, one of the world's windiest spots, with waves swelling around you. It's like tying a thread to a kite at the beach and then trying to put it through the eye of a needle.


'L' is the robot's name. Pepper picking is its game.

Mashable

It all started when a piece of his skull was dredged up from the North Sea.


Perfecting self-driving cars – can it be done?

Robohub

Robotic vehicles have been used in dangerous environments for decades, from decommissioning the Fukushima nuclear power plant to inspecting underwater energy infrastructure in the North Sea. More recently, autonomous vehicles from boats to grocery delivery carts have made the gentle transition from research centres into the real world with very few hiccups. Yet the promised arrival of self-driving cars has not progressed beyond the testing stage. And in one test drive of an Uber self-driving car in 2018, a pedestrian was killed by the vehicle. Although these accidents happen every day when humans are behind the wheel, the public holds driverless cars to far higher safety standards, interpreting one-off accidents as proof that these vehicles are too unsafe to unleash on public roads.


An introduction to distributed training of deep neural networks for segmentation tasks with large seismic datasets

arXiv.org Artificial Intelligence

Deep learning applications are progressing rapidly in seismic processing and interpretation tasks. However, the majority of approaches subsample data volumes and restrict model sizes to minimise computational requirements. Subsampling the data risks losing vital spatio-temporal information that could aid training, whilst restricting model sizes can impact model performance or, in extreme cases, render more complicated tasks such as segmentation impossible. This paper illustrates how to tackle the two main issues in training large neural networks: memory limitations and impracticably long training times. Typically, training data is preloaded into memory prior to training, a particular challenge for seismic applications where the data is typically four times larger than that used for standard image processing tasks (float32 vs. uint8). Using a microseismic use case, we illustrate how over 750 GB of data can be used to train a model via a data-generator approach that holds in memory only the data required for the current training batch. Furthermore, efficient training of large models is illustrated through the training of a 7-layer UNet with input data dimensions of 4096 × 4096 (≈7.8M parameters). Through a batch-splitting distributed training approach, training times are reduced by a factor of four. The combination of data generators and distributed training removes any need for data subsampling or restriction of network size, offering the opportunity to utilise larger networks, higher-resolution input data, or to move from 2D to 3D problem spaces.
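The data-generator idea, reading only the current batch from disk so memory use is O(batch) rather than O(dataset), can be sketched in a few lines. File layout, sizes, and names below are illustrative; a real seismic workflow would read SEG-Y or HDF5 volumes through a dedicated library:

```python
import array
import os
import tempfile

# Write a toy "seismic volume" to disk: 1000 traces of 64 float32 samples.
# In practice this stands in for hundreds of GB that cannot be preloaded.
n_traces, n_samples = 1000, 64
path = os.path.join(tempfile.mkdtemp(), "volume.f32")
with open(path, "wb") as f:
    for i in range(n_traces):
        array.array("f", [float(i)] * n_samples).tofile(f)

def batch_generator(path, n_traces, n_samples, batch_size):
    """Yield one batch at a time, reading only that batch from disk."""
    bytes_per_trace = 4 * n_samples          # float32
    with open(path, "rb") as f:
        for start in range(0, n_traces, batch_size):
            count = min(batch_size, n_traces - start)
            f.seek(start * bytes_per_trace)   # jump to this batch's traces
            buf = array.array("f")
            buf.fromfile(f, count * n_samples)
            yield [buf[i * n_samples:(i + 1) * n_samples]
                   for i in range(count)]

n_batches = sum(1 for _ in batch_generator(path, n_traces, n_samples, 128))
print(n_batches)  # ceil(1000 / 128) = 8
```

A training loop would consume such a generator once per epoch; frameworks like Keras and PyTorch accept generator-style datasets directly, which is what makes the 750 GB case in the abstract feasible without subsampling.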